In recent years, artificial intelligence has advanced rapidly, and the automobile industry has brought autonomous driving systems to market. However, the autonomous driving systems installed in driverless vehicles still need strengthening in terms of cybersecurity: many potential attacks can cause traffic accidents and endanger passengers. We explored two potential attacks against autonomous driving systems, stroboscopic attacks and colored-light illumination attacks, and analyzed their impact on the traffic sign recognition accuracy of deep learning models, namely a convolutional neural network (CNN) and You Only Look Once (YOLO)v5. We trained the CNN and YOLOv5 on the German Traffic Sign Recognition Benchmark dataset and then subjected traffic signs to a range of attacks, including LED strobing and illumination by LEDs of various colors. In a controlled experimental environment, we measured how LED lights with different flashing frequencies and colors affect the recognition accuracy of each model. The experimental results show that the CNN is more resilient to these attacks than YOLOv5. Moreover, every attack method degrades the original model to some degree, impairing a self-driving car's ability to recognize traffic signs: the system may fail to detect a traffic sign at all, or may misclassify the sign it does detect.
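To make the colored-light illumination attack concrete, the sketch below (our illustrative code, not the study's actual pipeline) blends a uniform LED tint into a traffic sign image before it is passed to a recognizer; the blend ratio `alpha`, the tint colors, and the file name `stop_sign.png` are assumptions chosen for demonstration.

```python
# Illustrative simulation of a colored-light illumination attack on a
# traffic sign image. A sketch under stated assumptions, not the
# experimental setup used in the paper.
import numpy as np
from PIL import Image

def apply_color_illumination(img: Image.Image,
                             tint: tuple[int, int, int],
                             alpha: float = 0.4) -> Image.Image:
    """Blend a uniform colored-light tint over the image.

    tint  -- RGB triple of the simulated LED color, e.g. (255, 0, 0) for red
    alpha -- illumination strength in [0, 1]; 0 leaves the image unchanged
    """
    arr = np.asarray(img.convert("RGB"), dtype=np.float32)
    overlay = np.broadcast_to(np.array(tint, dtype=np.float32), arr.shape)
    blended = (1.0 - alpha) * arr + alpha * overlay
    return Image.fromarray(blended.clip(0, 255).astype(np.uint8))

# Example: produce red-, green-, and blue-lit variants of one sign image
# ("stop_sign.png" is a placeholder) to feed to a trained recognizer.
if __name__ == "__main__":
    sign = Image.open("stop_sign.png")
    for name, tint in {"red": (255, 0, 0),
                       "green": (0, 255, 0),
                       "blue": (0, 0, 255)}.items():
        apply_color_illumination(sign, tint).save(f"stop_sign_{name}_lit.png")
```

Comparing a model's predictions on the original and tinted variants gives a simple proxy for how strongly each illumination color perturbs recognition accuracy.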